I am looking forward to learning a lot about machine learning and R during this course. My GitHub repository is https://github.com/hhelskya/IODS-project
# This is a so-called "R chunk" where you can write R code.
date()
## [1] "Wed Nov 25 14:34:38 2020"
Cannot wait to learn more.
date()
## [1] "Wed Nov 25 14:34:38 2020"
ds <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/learning2014.csv", header = TRUE)
ds$Points
## [1] 25 12 24 10 22 21 21 31 24 26 31 31 23 25 21 31 20 22 9 24 28 30 24 9 26
## [26] 32 32 33 29 30 19 23 19 12 10 11 20 26 31 20 23 12 24 17 29 23 28 31 23 25
## [51] 18 19 22 25 21 9 28 25 29 33 33 25 18 22 17 25 28 22 26 11 29 22 21 28 33
## [76] 16 31 22 31 23 26 12 26 31 19 30 12 17 18 19 21 24 28 17 18 17 23 26 28 31
## [101] 27 25 23 21 27 28 23 21 25 11 19 24 28 21 24 24 20 19 30 22 16 16 19 30 23
## [126] 19 18 28 21 19 27 24 21 20 28 12 21 28 31 18 25 19 21 16 7 21 17 22 18 25
## [151] 24 23 23 26 12 32 22 20 21 23 20 28 31 18 30 19
dim(ds)
## [1] 166 7
str(ds)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 1.5 1.67 1.5 2.17 1.83 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
summary(ds)
## gender Age attitude deep
## Length:166 Min. :17.00 Min. :1.000 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:1.500 1st Qu.:3.333
## Mode :character Median :22.00 Median :1.667 Median :3.667
## Mean :25.51 Mean :1.883 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:2.000 3rd Qu.:4.083
## Max. :55.00 Max. :4.667 Max. :4.917
## stra surf Points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
The dataset contains 166 rows and 7 columns: gender (F/M), age, exam points, and four combination variables built by averaging groups of related survey questions. The combination variables and the original questions they are built from are:

attitude: Aa, Ab, Ac, Ad, Ae, Af
deep: D03+D11+D19+D27, D07+D14+D22+D30, D06+D15+D23+D31
surf: SU02+SU10+SU18+SU26, SU05+SU13+SU21+SU29, SU08+SU16+SU24+SU32
stra: ST01+ST09+ST17+ST25, ST04+ST12+ST20+ST28

Gender is of type chr, age and points are of type int, and the rest of the variables are of type num, as the str() output above shows.
The minimum age in the dataset is 17 and the maximum is 55. Values for attitude range from 1.000 to 4.667, for deep from 1.583 to 4.917, for stra from 1.250 to 5.000, and for surf from 1.583 to 4.333. The minimum number of points is 7.00 and the maximum is 33.00. The summary output above also shows the 1st quartile, median, mean, and 3rd quartile of each variable.
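For reference, this is roughly how such combination variables are created; a minimal sketch, assuming the raw survey items live in a data frame called lrn14 with question columns named as listed above (the data frame name and columns are assumptions, not code run in this report):

# average the deep-learning question items into one combination variable (sketch)
deep_questions <- c("D03", "D11", "D19", "D27", "D07", "D14", "D22", "D30", "D06", "D15", "D23", "D31")
lrn14$deep <- rowMeans(lrn14[, deep_questions])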
pairs(ds[-1])
The scatter plot matrix above describes the pairwise relationships between the variables. We have removed gender (the first column) from the plot.
library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
library(ggplot2)
p <- ggpairs(ds, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p
Above is a more advanced plot showing, for instance, the correlations between the variables and the distribution of each variable.
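To read the correlations as exact numbers rather than from the plot, they can also be computed directly; a small sketch on the numeric columns of ds:

# correlation matrix of the numeric variables (gender excluded)
cor(ds[, c("Age", "attitude", "deep", "stra", "surf", "Points")])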
# a scatter plot of points versus attitude
library(ggplot2)
# colnames(learning2014)[7] <- "points"
qplot(attitude, Points, data = ds) + geom_smooth(method = "lm")
## `geom_smooth()` using formula 'y ~ x'
my_model <- lm(Points ~ attitude + deep + Age, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ attitude + deep + Age, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.0562 -3.7634 0.2952 4.6517 10.7479
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 25.26830 3.69183 6.844 1.5e-10 ***
## attitude -0.19559 0.61906 -0.316 0.752
## deep -0.10027 0.83390 -0.120 0.904
## Age -0.07111 0.05940 -1.197 0.233
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.921 on 162 degrees of freedom
## Multiple R-squared: 0.009356, Adjusted R-squared: -0.008989
## F-statistic: 0.51 on 3 and 162 DF, p-value: 0.6759
The Residuals section summarizes the residual distribution: minimum -16.0562, maximum 10.7479, median 0.2952, first quartile -3.7634, and third quartile 4.6517. The t-value measures the size of an estimate relative to its variation, so the larger its absolute value, the stronger the evidence against the null hypothesis. Age has the largest t-value in absolute terms (-1.197), but even that is not large enough to reject the null hypothesis. The p-value (Pr) is smallest for Age (0.233) but still above 0.05, so none of these predictors is statistically significant and the null hypotheses cannot be rejected. The residual standard error is 5.921. The R-squared values indicate how well the model explains the variance of the response: multiple R-squared (0.009356) and adjusted R-squared (-0.008989) are almost the same and show that the model explains the variance very poorly.
my_model <- lm(Points ~ stra + surf +gender, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ stra + surf + gender, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.2430 -3.4525 0.3105 4.2753 10.2382
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 22.2924 3.4464 6.468 1.12e-09 ***
## stra 1.0936 0.6022 1.816 0.0712 .
## surf -1.2249 0.8752 -1.400 0.1635
## genderM 1.2599 0.9736 1.294 0.1974
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.81 on 162 degrees of freedom
## Multiple R-squared: 0.0462, Adjusted R-squared: 0.02854
## F-statistic: 2.616 on 3 and 162 DF, p-value: 0.05295
confint(my_model)
## 2.5 % 97.5 %
## (Intercept) 15.48661602 29.098118
## stra -0.09564338 2.282801
## surf -2.95303542 0.503321
## genderM -0.66254961 3.182448
my_model <- lm(Points ~ attitude, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ attitude, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.7533 -3.7392 0.2186 4.9615 10.3311
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 23.0346 1.2479 18.459 <2e-16 ***
## attitude -0.1688 0.6165 -0.274 0.785
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.911 on 164 degrees of freedom
## Multiple R-squared: 0.0004569, Adjusted R-squared: -0.005638
## F-statistic: 0.07496 on 1 and 164 DF, p-value: 0.7846
Not significant
my_model <- lm(Points ~ deep, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ deep, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.6913 -3.6935 0.2862 4.9957 10.3537
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 23.1141 3.0908 7.478 4.31e-12 ***
## deep -0.1080 0.8306 -0.130 0.897
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.913 on 164 degrees of freedom
## Multiple R-squared: 0.000103, Adjusted R-squared: -0.005994
## F-statistic: 0.01689 on 1 and 164 DF, p-value: 0.8967
Not significant
my_model <- lm(Points ~ Age, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ Age, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.0360 -3.7531 0.0958 4.6762 10.8128
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 24.52150 1.57339 15.585 <2e-16 ***
## Age -0.07074 0.05901 -1.199 0.232
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared: 0.008684, Adjusted R-squared: 0.00264
## F-statistic: 1.437 on 1 and 164 DF, p-value: 0.2324
Not significant
my_model <- lm(Points ~ stra, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ stra, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.5581 -3.8198 0.1042 4.3024 10.1394
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 19.233 1.897 10.141 <2e-16 ***
## stra 1.116 0.590 1.892 0.0603 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared: 0.02135, Adjusted R-squared: 0.01538
## F-statistic: 3.578 on 1 and 164 DF, p-value: 0.06031
Not significant
my_model <- lm(Points ~ surf, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ surf, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -14.6539 -3.3744 0.3574 4.4734 10.2234
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.2017 2.4432 11.134 <2e-16 ***
## surf -1.6091 0.8613 -1.868 0.0635 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.851 on 164 degrees of freedom
## Multiple R-squared: 0.02084, Adjusted R-squared: 0.01487
## F-statistic: 3.49 on 1 and 164 DF, p-value: 0.06351
Not significant
my_model <- lm(Points ~ gender, data = ds )
summary(my_model)
##
## Call:
## lm(formula = Points ~ gender, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.3273 -3.3273 0.5179 4.5179 10.6727
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 22.3273 0.5613 39.776 <2e-16 ***
## genderM 1.1549 0.9664 1.195 0.234
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.887 on 164 degrees of freedom
## Multiple R-squared: 0.008632, Adjusted R-squared: 0.002587
## F-statistic: 1.428 on 1 and 164 DF, p-value: 0.2338
Not significant. We choose stra for further investigation, since it has the largest absolute t-value.
my_model <- lm(Points ~ stra, data = ds )
plot(ds$stra,ds$Points)
abline(my_model, col="red")
my_model
##
## Call:
## lm(formula = Points ~ stra, data = ds)
##
## Coefficients:
## (Intercept) stra
## 19.234 1.116
summary(my_model)
##
## Call:
## lm(formula = Points ~ stra, data = ds)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.5581 -3.8198 0.1042 4.3024 10.1394
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 19.233 1.897 10.141 <2e-16 ***
## stra 1.116 0.590 1.892 0.0603 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.849 on 164 degrees of freedom
## Multiple R-squared: 0.02135, Adjusted R-squared: 0.01538
## F-statistic: 3.578 on 1 and 164 DF, p-value: 0.06031
qqnorm(ds$Points, pch = 1, frame = FALSE)
qqline(ds$Points, col = "steelblue", lwd = 2)
plot(lm(Points~stra,data=ds))
The assumption is that the strategic approach (stra) explains the overall points (Points): Points is modelled as a linear combination of stra. A residual is the difference between an observed value of the response variable and the fitted value, i.e. the error. Residuals can be used to check the validity of the model assumptions.

There are several assumptions about the errors. The first is that they are normally distributed. A QQ-plot of the residuals is a method to explore this assumption: the better the data points align with the line, the closer the errors are to a normal distribution. In our QQ-plot the points deviate from the line at both ends but follow it quite well in the middle. We could say the errors fit the line well between -1 and 1.5, reasonably well below -1, and not so well above 1.5. Therefore the errors are reasonably well normally distributed.

The second assumption is the constant variance of errors: the size of the errors does not depend on the explanatory variables. This can be explored with a scatter plot of residuals versus model predictions; any pattern in the scatter plot implies a problem with this assumption. In our example there is no pattern to be found, so this assumption holds.

Leverage measures how much impact an observation has on the model. The residuals vs leverage plot can be used to find observations with unusually high impact, the outliers. In our example there are no outliers.
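As a minimal sketch of the same idea, the residuals-vs-fitted check can be drawn by hand from the fitted model (equivalent in spirit to the first panel produced by plot() above):

# residuals against fitted values; a patternless band around zero supports constant variance
plot(fitted(my_model), resid(my_model), xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, col = "red")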
date()
## [1] "Wed Nov 25 14:34:48 2020"
alc <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/alc.csv", sep = ",", header = TRUE)
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
dim(alc)
## [1] 382 35
str(alc)
## 'data.frame': 382 obs. of 35 variables:
## $ school : chr "GP" "GP" "GP" "GP" ...
## $ sex : chr "F" "F" "F" "F" ...
## $ age : int 18 17 15 15 16 16 16 17 15 15 ...
## $ address : chr "U" "U" "U" "U" ...
## $ famsize : chr "GT3" "GT3" "LE3" "GT3" ...
## $ Pstatus : chr "A" "T" "T" "T" ...
## $ Medu : int 4 1 1 4 3 4 2 4 3 3 ...
## $ Fedu : int 4 1 1 2 3 3 2 4 2 4 ...
## $ Mjob : chr "at_home" "at_home" "at_home" "health" ...
## $ Fjob : chr "teacher" "other" "other" "services" ...
## $ reason : chr "course" "course" "other" "home" ...
## $ nursery : chr "yes" "no" "yes" "yes" ...
## $ internet : chr "no" "yes" "yes" "yes" ...
## $ guardian : chr "mother" "father" "mother" "mother" ...
## $ traveltime: int 2 1 1 1 1 1 1 2 1 1 ...
## $ studytime : int 2 2 2 3 2 2 2 2 2 2 ...
## $ failures : int 0 0 2 0 0 0 0 0 0 0 ...
## $ schoolsup : chr "yes" "no" "yes" "no" ...
## $ famsup : chr "no" "yes" "no" "yes" ...
## $ paid : chr "no" "no" "yes" "yes" ...
## $ activities: chr "no" "no" "no" "yes" ...
## $ higher : chr "yes" "yes" "yes" "yes" ...
## $ romantic : chr "no" "no" "no" "yes" ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ freetime : int 3 3 3 2 3 4 4 1 2 5 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ Dalc : int 1 1 2 1 1 1 1 1 1 1 ...
## $ Walc : int 1 1 3 1 2 2 1 1 1 1 ...
## $ health : int 3 3 3 5 5 5 3 1 1 5 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ G1 : int 2 7 10 14 8 14 12 8 16 13 ...
## $ G2 : int 8 8 10 14 12 14 12 9 17 14 ...
## $ G3 : int 8 8 11 14 12 14 12 10 18 14 ...
## $ alc_use : num 1 1 2.5 1 1.5 1.5 1 1 1 1 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
summary(alc)
## school sex age address
## Length:382 Length:382 Min. :15.00 Length:382
## Class :character Class :character 1st Qu.:16.00 Class :character
## Mode :character Mode :character Median :17.00 Mode :character
## Mean :16.59
## 3rd Qu.:17.00
## Max. :22.00
## famsize Pstatus Medu Fedu
## Length:382 Length:382 Min. :0.000 Min. :0.000
## Class :character Class :character 1st Qu.:2.000 1st Qu.:2.000
## Mode :character Mode :character Median :3.000 Median :3.000
## Mean :2.806 Mean :2.565
## 3rd Qu.:4.000 3rd Qu.:4.000
## Max. :4.000 Max. :4.000
## Mjob Fjob reason nursery
## Length:382 Length:382 Length:382 Length:382
## Class :character Class :character Class :character Class :character
## Mode :character Mode :character Mode :character Mode :character
##
##
##
## internet guardian traveltime studytime
## Length:382 Length:382 Min. :1.000 Min. :1.000
## Class :character Class :character 1st Qu.:1.000 1st Qu.:1.000
## Mode :character Mode :character Median :1.000 Median :2.000
## Mean :1.448 Mean :2.037
## 3rd Qu.:2.000 3rd Qu.:2.000
## Max. :4.000 Max. :4.000
## failures schoolsup famsup paid
## Min. :0.0000 Length:382 Length:382 Length:382
## 1st Qu.:0.0000 Class :character Class :character Class :character
## Median :0.0000 Mode :character Mode :character Mode :character
## Mean :0.2016
## 3rd Qu.:0.0000
## Max. :3.0000
## activities higher romantic famrel
## Length:382 Length:382 Length:382 Min. :1.000
## Class :character Class :character Class :character 1st Qu.:4.000
## Mode :character Mode :character Mode :character Median :4.000
## Mean :3.937
## 3rd Qu.:5.000
## Max. :5.000
## freetime goout Dalc Walc health
## Min. :1.00 Min. :1.000 Min. :1.000 Min. :1.000 Min. :1.000
## 1st Qu.:3.00 1st Qu.:2.000 1st Qu.:1.000 1st Qu.:1.000 1st Qu.:3.000
## Median :3.00 Median :3.000 Median :1.000 Median :2.000 Median :4.000
## Mean :3.22 Mean :3.113 Mean :1.482 Mean :2.296 Mean :3.573
## 3rd Qu.:4.00 3rd Qu.:4.000 3rd Qu.:2.000 3rd Qu.:3.000 3rd Qu.:5.000
## Max. :5.00 Max. :5.000 Max. :5.000 Max. :5.000 Max. :5.000
## absences G1 G2 G3 alc_use
## Min. : 0.0 Min. : 2.00 Min. : 4.00 Min. : 0.00 Min. :1.000
## 1st Qu.: 1.0 1st Qu.:10.00 1st Qu.:10.00 1st Qu.:10.00 1st Qu.:1.000
## Median : 3.0 Median :12.00 Median :12.00 Median :12.00 Median :1.500
## Mean : 4.5 Mean :11.49 Mean :11.47 Mean :11.46 Mean :1.889
## 3rd Qu.: 6.0 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:2.500
## Max. :45.0 Max. :18.00 Max. :18.00 Max. :18.00 Max. :5.000
## high_use
## Mode :logical
## FALSE:268
## TRUE :114
##
##
##
My assumption is that going out (goout) and absences (absences) increase the consumption of alcohol, whereas the more time spent on studies (studytime) and other activities (activities), the lower the consumption.
library(tidyr); library(dplyr); library(ggplot2)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
glimpse(alc)
## Rows: 382
## Columns: 35
## $ school <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "G...
## $ sex <chr> "F", "F", "F", "F", "F", "M", "M", "F", "M", "M", "F", "...
## $ age <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15, 15, ...
## $ address <chr> "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "...
## $ famsize <chr> "GT3", "GT3", "LE3", "GT3", "GT3", "LE3", "LE3", "GT3", ...
## $ Pstatus <chr> "A", "T", "T", "T", "T", "T", "T", "A", "A", "T", "T", "...
## $ Medu <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, 3, 3,...
## $ Fedu <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, 3, 2,...
## $ Mjob <chr> "at_home", "at_home", "at_home", "health", "other", "ser...
## $ Fjob <chr> "teacher", "other", "other", "services", "other", "other...
## $ reason <chr> "course", "course", "other", "home", "home", "reputation...
## $ nursery <chr> "yes", "no", "yes", "yes", "yes", "yes", "yes", "yes", "...
## $ internet <chr> "no", "yes", "yes", "yes", "no", "yes", "yes", "no", "ye...
## $ guardian <chr> "mother", "father", "mother", "mother", "father", "mothe...
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 3, 1,...
## $ studytime <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, 2, 1,...
## $ failures <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,...
## $ schoolsup <chr> "yes", "no", "yes", "no", "no", "no", "no", "yes", "no",...
## $ famsup <chr> "no", "yes", "no", "yes", "yes", "yes", "no", "yes", "ye...
## $ paid <chr> "no", "no", "yes", "yes", "yes", "yes", "no", "no", "yes...
## $ activities <chr> "no", "no", "no", "yes", "no", "yes", "no", "no", "no", ...
## $ higher <chr> "yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", ...
## $ romantic <chr> "no", "no", "no", "yes", "no", "no", "no", "no", "no", "...
## $ famrel <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, 5, 5,...
## $ freetime <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, 3, 5,...
## $ goout <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, 2, 5,...
## $ Dalc <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2,...
## $ Walc <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, 1, 4,...
## $ health <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, 4, 5,...
## $ absences <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, 3, 9,...
## $ G1 <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11, 14, ...
## $ G2 <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11, 15,...
## $ G3 <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 12, 16...
## $ alc_use <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.5, 1...
## $ high_use <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, F...
gather(alc) %>% glimpse
## Rows: 13,370
## Columns: 2
## $ key <chr> "school", "school", "school", "school", "school", "school", "...
## $ value <chr> "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "GP", "...
g <- gather(alc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free")
g + geom_bar()
The bar plots above show the distribution of each variable, drawn from the key-value pairs of the gathered data.
my_model <- lm(high_use ~ goout + absences + studytime + activities, data = alc )
summary(my_model)
##
## Call:
## lm(formula = high_use ~ goout + absences + studytime + activities,
## data = alc)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.8109 -0.3015 -0.1403 0.3580 1.0860
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.023625 0.086675 0.273 0.785336
## goout 0.134130 0.019165 6.999 1.19e-11 ***
## absences 0.013991 0.003939 3.552 0.000430 ***
## studytime -0.088581 0.025683 -3.449 0.000626 ***
## activitiesyes -0.047960 0.042727 -1.122 0.262380
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4146 on 377 degrees of freedom
## Multiple R-squared: 0.1897, Adjusted R-squared: 0.1811
## F-statistic: 22.06 on 4 and 377 DF, p-value: 2.228e-16
The t-value measures the size of an estimate relative to its variation, so the larger its absolute value, the stronger the evidence against the null hypothesis. goout, absences, and studytime have t-values large enough to reject the null hypothesis, and the p-value (Pr) is less than 0.05 for all three. Based on the results, the null hypothesis can be rejected for goout, absences, and studytime, but not for activities, so we will build the model without activities. As expected, it looks like goout and absences increase alcohol consumption and studytime decreases it; activities does not seem to have a clear effect.
my_model2 <- lm(high_use ~ goout + absences + studytime, data = alc )
summary(my_model2)
##
## Call:
## lm(formula = high_use ~ goout + absences + studytime, data = alc)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.7837 -0.2938 -0.1357 0.3622 1.0642
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.007067 0.085440 0.083 0.934125
## goout 0.133271 0.019156 6.957 1.54e-11 ***
## absences 0.013966 0.003940 3.545 0.000442 ***
## studytime -0.091472 0.025563 -3.578 0.000391 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.4148 on 378 degrees of freedom
## Multiple R-squared: 0.187, Adjusted R-squared: 0.1805
## F-statistic: 28.97 on 3 and 378 DF, p-value: < 2.2e-16
The fitted model is roughly: high_use = 0.01 + 0.13 * goout + 0.01 * absences - 0.09 * studytime
Residual standard error: the standard deviation of the residuals (errors) of the regression model. Multiple R-squared: the proportion of the variance of the response explained by the model. Adjusted R-squared: how well the model fits the data, i.e. the percentage of the dependent-variable variation that the linear model explains (ranging between 0 and 1), adjusted for the number of predictors. The R-squared is quite low, so there is probably something in the residual plots we should investigate.
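As a quick worked example (a hypothetical student profile, not a row from the data), plugging goout = 5, absences = 10 and studytime = 2 into the rounded equation above:

# predicted value from the rounded coefficients; the student profile is hypothetical
0.01 + 0.13 * 5 + 0.01 * 10 - 0.09 * 2  # 0.58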
par(mfrow = c(2,2))
plot(my_model2, which=c(1,2,5))
The Residuals vs Fitted plot shows that the residuals do not lie close to the zero line at all. The QQ-plot shows that the data points really do not follow the line well. The Residuals vs Leverage plot shows most of the points at the beginning of the line. Most likely the relationship is not linear. (There are no points outside Cook's distance, so no big outliers.)
# grouping the data by goout, absences and studytime. counting the count and the mean of alc_use.
alc %>% group_by(goout, absences, studytime) %>% summarise(count = n(), mean_grade=mean(high_use))
## `summarise()` regrouping output by 'goout', 'absences' (override with `.groups` argument)
## # A tibble: 165 x 5
## # Groups: goout, absences [77]
## goout absences studytime count mean_grade
## <int> <int> <int> <int> <dbl>
## 1 1 0 1 4 0
## 2 1 0 2 2 0
## 3 1 1 1 1 0
## 4 1 1 2 3 0
## 5 1 1 4 1 0
## 6 1 2 1 2 0.5
## 7 1 2 2 1 0
## 8 1 3 2 1 0
## 9 1 5 3 1 1
## 10 1 8 1 1 0
## # ... with 155 more rows
Groups with 0 or 1 as mean_grade are uniformly low or high in consumption, but the other values show variance. For example, a student with goout = 5, absences = 19, and studytime = 2 shows high consumption in the data, but a student with the same goout and studytime and even more absences (21) shows low consumption. Since there are no outliers, this must be a true data point, and the relationship is not linear.
library(ggplot2)
g1 <- ggplot(alc, aes(x = high_use, y = goout))
g1 + geom_boxplot() + ylab("go out")
Based on the box plot, high_use and going out a lot appear to be correlated.
g2 <- ggplot(alc, aes(x = high_use, y = absences))
g2 + geom_boxplot() + ylab("absences")
Based on the box plot, it looks like more absences mean more alcohol consumption, with some exceptions though.
g3 <- ggplot(alc, aes(x = high_use, y = studytime))
g3 + geom_boxplot() + ylab("study time")
Based on this box plot, the more time students spend on studying, the less alcohol they consume. Let's build a logistic model (my_model3).
my_model3 <- glm(high_use ~ goout + absences + studytime, data = alc, family = "binomial")
summary(my_model3)
##
## Call:
## glm(formula = high_use ~ goout + absences + studytime, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8457 -0.7733 -0.5178 0.8432 2.5036
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.48582 0.52982 -4.692 2.71e-06 ***
## goout 0.72735 0.11786 6.171 6.78e-10 ***
## absences 0.07011 0.02204 3.181 0.001470 **
## studytime -0.56048 0.16672 -3.362 0.000774 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 390.14 on 378 degrees of freedom
## AIC: 398.14
##
## Number of Fisher Scoring iterations: 4
Let’s see the coefficients of the model.
coef(my_model3)
## (Intercept) goout absences studytime
## -2.48582049 0.72734718 0.07011218 -0.56048258
goout and studytime have stronger coefficients for high_use than absences does.
# compute odds ratios (OR)
OR <- coef(my_model3) %>% exp
# compute confidence intervals (CI)
CI <- confint(my_model3) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.08325721 0.02864231 0.2297441
## goout 2.06958310 1.65203749 2.6250666
## absences 1.07262851 1.02840235 1.1225709
## studytime 0.57093348 0.40733791 0.7846264
The odds ratio (OR) is a measure of the strength of association between an exposure and an outcome. OR > 1 means greater odds of the outcome given the exposure, i.e. the variable is positively associated with "success", in our case high consumption of alcohol. goout clearly has high odds, absences less clearly so (1.07 > 1) but still above 1, and studytime (< 1) means lower odds of high consumption. The confidence intervals (2.5% and 97.5%) show the uncertainty of the odds ratio estimates.
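As a quick check of where the odds ratios come from, exponentiating a coefficient by hand gives the same number, e.g. for goout:

# each one-point increase in goout multiplies the odds of high_use by about 2.07
exp(0.72734718)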
# predict() the probability of high_use
probabilities <- predict(my_model3, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the first ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% head(10)
## failures absences sex high_use probability prediction
## 1 0 5 F FALSE 0.41414989 FALSE
## 2 0 3 F FALSE 0.22892212 FALSE
## 3 2 8 F TRUE 0.16921600 FALSE
## 4 0 1 F FALSE 0.06645515 FALSE
## 5 0 2 F FALSE 0.11796259 FALSE
## 6 0 8 M FALSE 0.16921600 FALSE
## 7 0 0 M FALSE 0.33238962 FALSE
## 8 0 4 F FALSE 0.39724726 FALSE
## 9 0 0 M FALSE 0.10413596 FALSE
## 10 0 0 M FALSE 0.05317940 FALSE
This shows the prediction and the probability behind it. The prediction is compared to the true value (high_use) to see how good the model is.
# create the confusion matrix, tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 246 22
## TRUE 66 48
The number of correct predictions for FALSE (true negatives) is 246 and the number of incorrect ones (false positives) is 22. The number of correct predictions for TRUE (true positives) is 48 and the number of incorrect ones (false negatives) is 66. The model predicts students who do not consume high amounts of alcohol quite well, but it does not predict the high consumers as well.
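From the confusion matrix we can also compute the overall accuracy by hand:

# (true negatives + true positives) / all observations, about 0.77
(246 + 48) / (246 + 22 + 66 + 48)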
# initialize a plot of 'high_use' versus 'probability' in 'alc'
g11 <- ggplot(alc, aes(x = probability, y = high_use ))
g11 + geom_point(aes(col = prediction)) + ylab("high use")
# confusion matrix with probabilities
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.64397906 0.05759162 0.70157068
## TRUE 0.17277487 0.12565445 0.29842932
## Sum 0.81675393 0.18324607 1.00000000
This confirms the analysis made earlier: the prediction for FALSE is much better than the one for TRUE.
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293
If we always predict FALSE (prob = 0), the proportion of wrong predictions is about 30%.
loss_func(class = alc$high_use, prob = 1)
## [1] 0.7015707
And if we always predict TRUE (prob = 1), the proportion of wrong predictions is about 70%.
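These two baselines simply mirror the class proportions of high_use, which we can verify directly:

# share of FALSE and TRUE in high_use: always guessing FALSE is wrong ~30% of the time
table(alc$high_use) / nrow(alc)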
# probability based on the column probability
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2303665
Using the model's own predicted probabilities, the average proportion of wrong predictions in the training data is about 23%, which is better than either constant guess above.
# 10-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model3, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2408377
The error rate is a little bit better (0.24) than the one in DataCamp (0.26).
# 10-fold cross-validation for different models
# "school","sex","age","address","famsize","Pstatus","Medu","Fedu","Mjob","Fjob","reason","nursery","internet",
# "guardian","traveltime","studytime","failures","schoolsup","famsup","paid","activities","higher","romantic",
# "famrel","freetime","goout","Dalc","Walc","health","absences","G1","G2","G3","alc_use","high_use"
my_model4 <- glm(high_use ~ school + sex + age + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model4, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2539267
Using a model with many predictors is not useful here, since the error rate is higher than for the model with fewer predictors.
my_model5 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model5, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2382199
The error rate gets smaller when we drop predictors that have no correlation with high_use.
my_model6 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + activities + higher + romantic + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model6, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2172775
Dropping further weak predictors makes the error rate smaller still.
# the Boston data from the MASS package
# access the MASS package
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The dataset is Housing Values in Suburbs of Boston. This data frame contains the following columns:

crim: per capita crime rate by town.
zn: proportion of residential land zoned for lots over 25,000 sq.ft.
indus: proportion of non-retail business acres per town.
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
nox: nitrogen oxides concentration (parts per 10 million).
rm: average number of rooms per dwelling.
age: proportion of owner-occupied units built prior to 1940.
dis: weighted mean of distances to five Boston employment centres.
rad: index of accessibility to radial highways.
tax: full-value property-tax rate per $10,000.
ptratio: pupil-teacher ratio by town.
black: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
lstat: lower status of the population (percent).
medv: median value of owner-occupied homes in $1000s.
chas and rad are of type integer; the rest of the variables are of type numeric.
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
summary shows the minimum, maximum, and the first quartile, the median, and the third quartile of each variable in the dataset.
dim(Boston)
## [1] 506 14
The dataset has 506 rows and 14 columns.
# plot matrix of the variables
pairs(Boston[-1])
nox and dis, rm and lstat, rm and medv, and lstat and medv show some kind of linear pattern.
library(corrplot)
## corrplot 0.84 loaded
library(tidyverse)
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v tibble 3.0.4 v stringr 1.4.0
## v readr 1.4.0 v forcats 0.5.0
## v purrr 0.3.4
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## x MASS::select() masks dplyr::select()
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston)
# print the correlation matrix
corrplot(cor_matrix, method="circle")
crim correlates strongly with rad and tax; zn with dis; indus with nox, age, rad, tax, lstat, and dis; nox with indus, age, rad, tax, lstat, and dis; rm with medv; age with indus, nox, and lstat; dis with zn, indus, nox, and age; rad with crim, indus, nox, and especially tax; tax with crim, indus, nox, lstat, and especially rad; lstat with indus, rm, nox, age, and medv; and medv with rm and lstat.
library(GGally)
library(ggplot2)
p <- ggpairs(Boston, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p
Only rm looks like it’s almost normally distributed. The data needs to be scaled.
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
After scaling, each variable is centered on 0, and the scale (min and max) has changed for all the variables.
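A minimal sketch to verify the standardization (after scale(), every column should have mean 0 and standard deviation 1):

# column means should be ~0 (up to floating point) and standard deviations exactly 1
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)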
# change the object to data frame so that it will be easier to use the data
boston_scaled <- as.data.frame(boston_scaled)
class(boston_scaled)
## [1] "data.frame"
Our next job is to create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate) using quantiles as the break points.
# summary of the scaled crime rate
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
The minimum value is -0.42 and the maximum is 9.92. The first quartile is -0.41, the median is -0.39, and the third quartile is 0.007.
# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
These would be the limits for each category.
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE)
# look at the table of the new factor crime
table(crime)
## crime
## [-0.419,-0.411] (-0.411,-0.39] (-0.39,0.00739] (0.00739,9.92]
## 127 126 126 127
127 values have been assigned to the first and last categories, 126 to the second and third. Values between -0.419 and -0.411 are in category one, values between -0.411 and -0.39 in category two, values between -0.39 and 0.00739 in category three, and values between 0.00739 and 9.92 in category four. Let's label those categories low, med_low, med_high, and high.
crime <- cut(boston_scaled$crim, breaks = bins, labels=c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
Now the categories have names. Next we can remove the original variable (crim) from the scaled dataset.
boston_scaled <- dplyr::select(boston_scaled, -crim)
colnames(boston_scaled)
## [1] "zn" "indus" "chas" "nox" "rm" "age" "dis"
## [8] "rad" "tax" "ptratio" "black" "lstat" "medv"
And then we can add the new categorized variable (crime) to the dataset.
boston_scaled <- data.frame(boston_scaled, crime)
summary(boston_scaled)
## zn indus chas nox
## Min. :-0.48724 Min. :-1.5563 Min. :-0.2723 Min. :-1.4644
## 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723 1st Qu.:-0.9121
## Median :-0.48724 Median :-0.2109 Median :-0.2723 Median :-0.1441
## Mean : 0.00000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723 3rd Qu.: 0.5981
## Max. : 3.80047 Max. : 2.4202 Max. : 3.6648 Max. : 2.7296
## rm age dis rad
## Min. :-3.8764 Min. :-2.3331 Min. :-1.2658 Min. :-0.9819
## 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049 1st Qu.:-0.6373
## Median :-0.1084 Median : 0.3171 Median :-0.2790 Median :-0.5225
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617 3rd Qu.: 1.6596
## Max. : 3.5515 Max. : 1.1164 Max. : 3.9566 Max. : 1.6596
## tax ptratio black lstat
## Min. :-1.3127 Min. :-2.7047 Min. :-3.9033 Min. :-1.5296
## 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049 1st Qu.:-0.7986
## Median :-0.4642 Median : 0.2746 Median : 0.3808 Median :-0.1811
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332 3rd Qu.: 0.6024
## Max. : 1.7964 Max. : 1.6372 Max. : 0.4406 Max. : 3.5453
## medv crime
## Min. :-1.9063 low :127
## 1st Qu.:-0.5989 med_low :126
## Median :-0.1449 med_high:126
## Mean : 0.0000 high :127
## 3rd Qu.: 0.2683
## Max. : 2.9865
Now the data is ready and we can start working with it. First we divide the data into training (80%) and testing (20%) sets.
# number of rows in the Boston dataset
n <- 506
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set from that 80%
train <- boston_scaled[ind,]
# create test set from the remaining data
test <- boston_scaled[-ind,]
The train dataset has 404 rows and 14 columns; the test dataset has 102 rows and 14 columns. Let's train a linear discriminant analysis (LDA) classification model with crime as the target variable.
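A quick sanity check of the split sizes (the counts follow from taking 80% of the 506 rows):

dim(train)  # expected: 404 14
dim(test)   # expected: 102 14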
lda.fit <- lda(crime ~ . , data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2400990 0.2450495 0.2450495 0.2698020
##
## Group means:
## zn indus chas nox rm age
## low 1.03556121 -0.8845650 -0.06938576 -0.8766462 0.45326065 -0.8695177
## med_low -0.08900254 -0.2738927 -0.03371693 -0.5530833 -0.16772507 -0.3020433
## med_high -0.39109146 0.2207368 0.28443258 0.3949819 0.02188349 0.4236800
## high -0.48724019 1.0149946 -0.01948777 1.0686086 -0.33390427 0.8161185
## dis rad tax ptratio black lstat
## low 0.8223788 -0.6882425 -0.7339693 -0.41279465 0.38257128 -0.76942434
## med_low 0.3354408 -0.5433656 -0.4459934 -0.02098556 0.30389828 -0.09435156
## med_high -0.4198340 -0.4296790 -0.3158180 -0.32612305 0.04106694 0.03902136
## high -0.8521114 1.6596029 1.5294129 0.80577843 -0.79474706 0.83158570
## medv
## low 0.50836014
## med_low -0.02849802
## med_high 0.13558525
## high -0.66781680
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.13607342 0.76782465 -0.947209410
## indus 0.07623776 -0.24193898 0.121184606
## chas -0.06791378 -0.07089134 0.006942821
## nox 0.29642819 -0.68581522 -1.249987015
## rm -0.15695216 0.04957761 -0.149202626
## age 0.24856096 -0.22309432 -0.099594965
## dis -0.07806083 -0.22315939 0.330077499
## rad 3.76369863 0.87537312 -0.232739216
## tax -0.02854150 0.02094946 0.838061221
## ptratio 0.14985184 0.07585265 -0.153509685
## black -0.12821253 0.02684136 0.112787310
## lstat 0.12184371 -0.32946293 0.376646099
## medv 0.15431985 -0.44975617 -0.066386379
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9594 0.0311 0.0095
Prior probabilities of groups: the proportion of training observations in each group. The observations are quite equally distributed across the groups (all in the range of 24%-27%). Note that the exact numbers change slightly between runs, because the training rows are sampled randomly.

Group means: the group center of gravity, i.e. the mean of each variable in each group.

Coefficients of linear discriminants: the linear combination of predictor variables used to form the LDA decision rule. For example, from the output above, LD1 = 0.14*zn + 0.08*indus - 0.07*chas + 0.30*nox - 0.16*rm + 0.25*age - 0.08*dis + 3.76*rad - 0.03*tax + 0.15*ptratio - 0.13*black + 0.12*lstat + 0.15*medv.

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 achieves about 95.9% of the separation, whereas the other LDs contribute very little.
Let's define the arrows, create a numeric vector of the train set's crime classes, and draw a biplot.
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
The colour indicates each category. Let’s add the arrows we specified earlier.
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 5)
Next we take the crime classes from the test set and save them as correct_classes (so that we can compare against them when testing), and then remove the crime variable from the test dataset so that we can predict it with the model we built.
correct_classes <- test$crime
class(correct_classes)
## [1] "factor"
test <- dplyr::select(test, -crime)
colnames(test)
## [1] "zn" "indus" "chas" "nox" "rm" "age" "dis"
## [8] "rad" "tax" "ptratio" "black" "lstat" "medv"
There is no longer a crime variable in the test dataset. Let's use the model to predict on the test dataset and then compare the predictions to correct_classes.
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 17 11 2 0
## med_low 5 19 3 0
## med_high 0 10 15 2
## high 0 0 1 17
For the high category the model made excellent predictions, 17/18 correct. For med_high 15/27, for med_low 19/27, and for low 17/30 were predicted correctly.
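Overall, the share of correct test-set predictions is the diagonal of the table divided by the number of test observations:

# (17 + 19 + 15 + 17) correct predictions out of 102 test rows, about 67%
sum(diag(table(correct = correct_classes, predicted = lda.pred$class))) / length(correct_classes)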
Clustering
# load the Boston dataset, scale it and create the euclidean distance matrix
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled, method = "euclidean", diag = FALSE, upper = FALSE, p = 4)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
The Euclidean distance is the ordinary straight-line distance between two vectors.
Let’s calculate the manhattan distance.
dist_man <- dist(boston_scaled, method = "manhattan", diag = FALSE, upper = FALSE, p = 4)
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
The Manhattan distance is the sum of the absolute differences between two vectors.
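A minimal sketch comparing the two metrics by hand for the first two (scaled) observations; the results match the corresponding entries of dist_eu and dist_man:

x <- as.numeric(boston_scaled[1, ])
y <- as.numeric(boston_scaled[2, ])
sqrt(sum((x - y)^2))  # Euclidean: square root of the summed squared differences
sum(abs(x - y))       # Manhattan: sum of the absolute differences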
K-means clustering
km <-kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km$cluster)
Above we can see K-means clustering using 4 clusters, each identified by a different color.
What is the best k, number of clusters? One way to determine k is to look at how the total of within cluster sum of squares (WCSS) behaves when the number of cluster changes. When you plot the number of clusters and the total WCSS, the optimal number of clusters is when the total WCSS drops radically. Note that K-means randomly assigns the initial cluster centers and therefore might produce different results every time.
set.seed(900)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')
It looks like 2 is the optimal number of clusters, since the curve drops dramatically at k = 2.
Let’s create k-means using 2 as number of clusters.
km <-kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
medv and rm, and rm and lstat are the only pairs showing a linear pattern; medv and lstat, and dis and nox show a curved, non-linear pattern.
Bonus.
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
boston_scaled <- dplyr::select(boston_scaled, -crim)
n <- 506
ind <- sample(n, size = n * 0.8)
ktrain <- boston_scaled[ind,]
ktest <- boston_scaled[-ind,]
km <-kmeans(ktrain, centers = 4)
#length(km)
lda.fit <- lda(km$cluster ~ . , data = ktrain)
lda.fit
## Call:
## lda(km$cluster ~ ., data = ktrain)
##
## Prior probabilities of groups:
## 1 2 3 4
## 0.1064356 0.3143564 0.4133663 0.1658416
##
## Group means:
## zn indus chas nox rm age
## 1 -0.02556311 -0.4214017 1.65044081 -0.06341642 1.3347678 0.2238756
## 2 -0.48724019 1.1535174 -0.08632433 1.13408537 -0.4174781 0.8283821
## 3 -0.35206167 -0.4075331 -0.27232907 -0.42141667 -0.2310348 -0.1412526
## 4 1.77276888 -1.0794322 -0.27232907 -1.12647984 0.5809091 -1.4036878
## dis rad tax ptratio black lstat medv
## 1 -0.3483776 -0.3942834 -0.6093748 -1.02573014 0.2939814 -0.7238248 1.3805896
## 2 -0.8624951 1.1061684 1.2066260 0.60355843 -0.5855684 0.8635993 -0.7237526
## 3 0.1691226 -0.6043213 -0.6192280 0.05262372 0.3101287 -0.1548870 -0.1081300
## 4 1.4940692 -0.6064768 -0.5669409 -0.61647652 0.3518842 -0.8690652 0.6220355
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.003479948 -1.311689426 -0.761115369
## indus 0.936737602 -0.407503993 -0.181794321
## chas -0.167644631 0.631026345 -0.770943356
## nox 0.896989707 -0.452138083 -0.272352528
## rm -0.034025553 0.165801674 -0.615581321
## age -0.044459412 0.599126833 0.012642565
## dis -0.088521463 -0.629471813 0.005214464
## rad 0.642699177 0.117578513 -0.364357886
## tax 0.422662032 -0.667438098 -0.131882972
## ptratio 0.265080739 -0.157872219 0.136575290
## black -0.056390985 -0.002398193 0.054300281
## lstat 0.311829110 0.026941745 -0.480960215
## medv 0.064842317 0.292044772 -0.831575220
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.6545 0.2024 0.1431
Prior probabilities of groups: the proportion of training observations in each group; for example, about 41% of the observations belong to cluster 3. (The exact numbers change between runs, since both the sampling and the k-means initialization are random.)

Group means: the group center of gravity, the mean of each variable in each group.

Coefficients of linear discriminants: the linear combination of predictor variables used to form the LDA decision rule. For example, from the output above, LD1 = 0.00*zn + 0.94*indus - 0.17*chas + 0.90*nox - 0.03*rm - 0.04*age - 0.09*dis + 0.64*rad + 0.42*tax + 0.27*ptratio - 0.06*black + 0.31*lstat + 0.06*medv.

Proportion of trace is the percentage of separation achieved by each discriminant function: LD1 0.6545 + LD2 0.2024 + LD3 0.1431 = 1.
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# colour by the k-means cluster of each observation (not train$crime, which belongs to the earlier split)
classes <- km$cluster
plot(lda.fit, dimen = 2, col = classes, pch = classes)
Super-Bonus
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
# 3D plot coloured by crime class (train set)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= train$crime)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
# 3D plot by k means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= km$cluster)
The colourings of the two plots are very different, but the shape is the same because the data points are the same. The first plot colours the points by crime class, and the second shows which k-means cluster each data point belongs to.
human <- read.csv("C:/Users/Heli/Heli/HY/Introduction to Open Data Science/Projects/IODS-project/data/human2.csv", sep = ",", dec = ".", header = TRUE)
head(human)
##   X edu2F labM LifeExpect ExpectYrsEd GNIperCapita MMRatio BirthRate PercRepresinParliament
## 1 1   5.9 79.5       60.4         9.3         1885     400      86.8                   27.6
## 2 2  81.8 65.5       77.8        11.8         9943      21      15.3                   20.7
## 3 3  26.7 72.2       74.8        14.0        13054      89      10.0                   25.7
## 4 4  56.3 75.0       76.3        17.9        22050      69      54.4                   36.8
## 5 5  94.0 72.6       74.7        12.3         8124      29      27.1                   10.7
## 6 6  94.3 71.8       82.4        20.2        42261       6      12.1                   30.5
dim(human)
## [1] 155 9
Show a graphical overview of the data and show summaries of the variables in the data. Describe and interpret the outputs, commenting on the distributions of the variables and the relationships between them.
library(GGally)
library(ggplot2)
p <- ggpairs(human, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
p
Based on the summary statistics and the ggpairs plot, ExpectYrsEd is approximately normally distributed and labM nearly so; the remaining variables are clearly skewed. Strong correlations appear between BirthRate and MMRatio, ExpectYrsEd and MMRatio, ExpectYrsEd and BirthRate, LifeExpect and MMRatio, LifeExpect and edu2F, and MMRatio and edu2F.
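As a quick hedged check of that normality impression, base R's Shapiro-Wilk test can be applied to the two most normal-looking variables (a sketch; small p-values would argue against normality):
# Sketch: formal normality tests for the two most symmetric-looking variables
shapiro.test(human$ExpectYrsEd)
shapiro.test(human$labM)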
summary(human)
## X edu2F labM LifeExpect
## Min. : 1.0 Min. : 0.90 Min. :44.20 Min. :49.00
## 1st Qu.: 39.5 1st Qu.: 27.15 1st Qu.:68.70 1st Qu.:66.30
## Median : 78.0 Median : 56.60 Median :74.80 Median :74.20
## Mean : 78.0 Mean : 55.37 Mean :74.38 Mean :71.65
## 3rd Qu.:116.5 3rd Qu.: 85.15 3rd Qu.:80.60 3rd Qu.:77.25
## Max. :155.0 Max. :100.00 Max. :95.50 Max. :83.50
## ExpectYrsEd GNIperCapita MMRatio BirthRate
## Min. : 5.40 Min. : 581 Min. : 1.0 Min. : 0.60
## 1st Qu.:11.25 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65
## Median :13.50 Median : 12040 Median : 49.0 Median : 33.60
## Mean :13.18 Mean : 17628 Mean : 149.1 Mean : 47.16
## 3rd Qu.:15.20 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95
## Max. :20.20 Max. :123124 Max. :1100.0 Max. :204.80
## PercRepresinParliament
## Min. : 0.00
## 1st Qu.:12.40
## Median :19.30
## Mean :20.91
## 3rd Qu.:27.95
## Max. :57.50
pairs(human)
The base-R pairs() plot supports the same interpretations as the ggpairs overview above.
library(corrplot)
library(tidyverse)
# calculate the correlation matrix
cor_matrix <- cor(human)
# visualize the correlation matrix
corrplot(cor_matrix, method = "circle")
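One hedged refinement (not part of the original analysis): the X column is just a row index carried over from the CSV, so it adds no information; dropping it and plotting only the upper triangle with rounded coefficients gives a cleaner picture.
# Sketch: drop the row-index column and show only the upper triangle
cor_matrix_clean <- cor(select(human, -X)) %>% round(digits = 2)
corrplot(cor_matrix_clean, method = "circle", type = "upper", tl.cex = 0.6)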
The correlations are easier to read from the corrplot. Strong positive correlations can be seen between BirthRate and MMRatio, ExpectYrsEd and edu2F, ExpectYrsEd and LifeExpect, and LifeExpect and edu2F. Strong negative correlations can be seen between BirthRate and edu2F, BirthRate and LifeExpect, BirthRate and ExpectYrsEd, MMRatio and edu2F, MMRatio and LifeExpect, and MMRatio and ExpectYrsEd.
Perform principal component analysis (PCA) on the non-standardized human data. Show the variability captured by the principal components. Draw a biplot displaying the observations by the first two principal components.
pca_human <- prcomp(human)
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
(This warning appears five times, once for each skipped arrow.)
This biplot is not informative: GNIperCapita has a far larger variance than the other variables, so it dominates the first principal component of the unscaled data, and the remaining arrows shrink to (near) zero length, which is exactly what the zero-length-arrow warnings report. We need to standardize the variables in the human data and repeat the analysis.
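A quick hedged check makes the scale problem concrete; the standard deviation of GNIperCapita dwarfs those of the other variables:
# Sketch: compare the standard deviations of the (unscaled) variables
round(sapply(human, sd), 1)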
human_std <- scale(human)
pca_human_std <- prcomp(human_std)
biplot(pca_human_std, choices = 1:2, cex = c(0.8, 1), col = c("grey40", "deeppink2"))
The same correlations as described earlier can be seen here. Arrows pointing in the same direction indicate positive correlation, and the smaller the angle between two arrows, the stronger the correlation; arrows pointing in opposite directions indicate negative correlation. The angle between a variable arrow and a PC axis can be interpreted as the correlation between the two, and the length of each arrow is proportional to the standard deviation of the variable.
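This geometric interpretation can be sanity-checked numerically (a sketch, using the standardized data and PCA objects above):
# Sketch: correlation between one standardized variable and the PC1 scores
cor(human_std[, "edu2F"], pca_human_std$x[, "PC1"])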
Next, create a summary of the PCA and compute rounded percentages of variance captured by each PC.
s <- summary(pca_human_std)
pca_pr <- round(100*s$importance[2, ], digits = 3)
pca_pr
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 PC9
## 52.527 11.767 10.894 9.632 5.616 3.553 2.628 2.175 1.208
PC1 captures about 53% and PC2 about 12% of the variance.
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
biplot(pca_human_std, cex = c(0.8, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])
This biplot shows PC1 and PC2 with the percentage of variance each captures included in the axis labels.
Next we will load the tea dataset from the package FactoMineR and explore the data briefly.
library(FactoMineR)
data("tea")
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
There are 300 rows and 36 variables in the tea dataset. Age is of type int, but all the other variables are factors.
summary(tea)
## breakfast tea.time evening lunch
## breakfast :144 Not.tea time:131 evening :103 lunch : 44
## Not.breakfast:156 tea time :169 Not.evening:197 Not.lunch:256
##
##
##
##
##
## dinner always home work
## dinner : 21 always :103 home :291 Not.work:213
## Not.dinner:279 Not.always:197 Not.home: 9 work : 87
##
##
##
##
##
## tearoom friends resto pub
## Not.tearoom:242 friends :196 Not.resto:221 Not.pub:237
## tearoom : 58 Not.friends:104 resto : 79 pub : 63
##
##
##
##
##
## Tea How sugar how
## black : 74 alone:195 No.sugar:155 tea bag :170
## Earl Grey:193 lemon: 33 sugar :145 tea bag+unpackaged: 94
## green : 33 milk : 63 unpackaged : 36
## other: 9
##
##
##
## where price age sex
## chain store :192 p_branded : 95 Min. :15.00 F:178
## chain store+tea shop: 78 p_cheap : 7 1st Qu.:23.00 M:122
## tea shop : 30 p_private label: 21 Median :32.00
## p_unknown : 12 Mean :37.05
## p_upscale : 53 3rd Qu.:48.00
## p_variable :112 Max. :90.00
##
## SPC Sport age_Q frequency
## employee :59 Not.sportsman:121 15-24:92 1/day : 95
## middle :40 sportsman :179 25-34:69 1 to 2/week: 44
## non-worker :64 35-44:40 +2/day :127
## other worker:20 45-59:61 3 to 6/week: 34
## senior :35 +60 :38
## student :70
## workman :12
## escape.exoticism spirituality healthy
## escape-exoticism :142 Not.spirituality:206 healthy :210
## Not.escape-exoticism:158 spirituality : 94 Not.healthy: 90
##
##
##
##
##
## diuretic friendliness iron.absorption
## diuretic :174 friendliness :242 iron absorption : 31
## Not.diuretic:126 Not.friendliness: 58 Not.iron absorption:269
##
##
##
##
##
## feminine sophisticated slimming exciting
## feminine :129 Not.sophisticated: 85 No.slimming:255 exciting :116
## Not.feminine:171 sophisticated :215 slimming : 45 No.exciting:184
##
##
##
##
##
## relaxing effect.on.health
## No.relaxing:113 effect on health : 66
## relaxing :187 No.effect on health:234
##
##
##
##
##
There are many variables (36), which makes the data harder to analyze as a whole.
library(GGally)
library(ggplot2)
t <- ggpairs(tea, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
t
With 36 variables the ggpairs matrix is far too crowded to make sense of, so we keep a handful of columns and ignore the rest.
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- select(tea, all_of(keep_columns))
summary(tea_time)
## Tea How how sugar
## black : 74 alone:195 tea bag :170 No.sugar:155
## Earl Grey:193 lemon: 33 tea bag+unpackaged: 94 sugar :145
## green : 33 milk : 63 unpackaged : 36
## other: 9
## where lunch
## chain store :192 lunch : 44
## chain store+tea shop: 78 Not.lunch:256
## tea shop : 30
##
There are three kinds of tea in the data: black, Earl Grey, and green. Tea can be drunk alone or with lemon, milk, or something else; it can be bought as tea bags, unpackaged, or both; with or without sugar; from a chain store, a tea shop, or both; and with lunch or not. For each question the most common answer is: Earl Grey, drunk alone, from a tea bag, without sugar, bought from a chain store, and not with lunch.
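Whether this most-typical combination is also the single most frequent joint profile can be checked with dplyr's count() (a hedged sketch):
# Sketch: tabulate answer combinations from most to least frequent
library(dplyr)
count(tea_time, Tea, How, how, sugar, where, lunch, sort = TRUE)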
library(tidyr); library(dplyr); library(ggplot2)
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
This is a graphical way of showing the same information. These bar plots can also be used to identify variable categories with very low frequencies; such categories can distort the analysis and should be removed.
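The dropped-attributes warning comes from gather() combining factors whose level sets differ. A hedged alternative using the newer tidyr interface avoids it by converting the factors to character first:
library(tidyr); library(dplyr); library(ggplot2)
tea_time %>%
  mutate(across(everything(), as.character)) %>%  # factors to character, avoiding the warning
  pivot_longer(everything(), names_to = "key", values_to = "value") %>%
  ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))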
Multiple Correspondence Analysis (MCA) is a data analysis technique for nominal categorical data that detects and represents underlying structures in a dataset by mapping both observations and categories as points in a low-dimensional Euclidean space.
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.279 0.261 0.219 0.189 0.177 0.156 0.144
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519 7.841
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953 77.794
## Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.141 0.117 0.087 0.062
## % of var. 7.705 6.392 4.724 3.385
## Cumulative % of var. 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139 0.003
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626 0.027
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111 0.107
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841 0.127
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979 0.035
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990 0.020
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347 0.102
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459 0.161
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968 0.478
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898 0.141
## v.test Dim.3 ctr cos2 v.test
## black 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 2.867 | 0.433 9.160 0.338 10.053 |
## green -5.669 | -0.108 0.098 0.001 -0.659 |
## alone -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 3.226 | 1.329 14.771 0.218 8.081 |
## milk 2.422 | 0.013 0.003 0.000 0.116 |
## other 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
The eigenvalues in the summary above give the percentage of variance explained by each dimension.
Dim.1 explains about 15% of the variance, Dim.2 about 14%, Dim.3 about 12%, and so on; Dim.1 through Dim.4 together cover just over 50% of the variance.
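A scree plot makes the same point visually (a hedged sketch; in FactoMineR, mca$eig stores the eigenvalue, percentage of variance, and cumulative percentage per dimension):
# Sketch: bar plot of the percentage of variance per MCA dimension
barplot(mca$eig[, 2], names.arg = rownames(mca$eig), las = 2, ylab = "% of variance")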
plot(mca, invisible = c("ind"), habillage = "quali")
Each variable is drawn in its own color, and its categories are plotted as labeled points. The distance between any two category points is a measure of their similarity (or dissimilarity): categories with similar profiles lie close together on the factor map. As noted earlier, Not.lunch lies closer to the average profile than lunch, and chain store closer than tea shop.
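The coordinates behind the factor map can also be inspected directly from the MCA object (a sketch):
# Sketch: category coordinates on the first two MCA dimensions
round(mca$var$coord[, 1:2], 2)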